Mean-square contractivity of stochastic ϑ-methods
Similar articles
Mean square convergence analysis for kernel least mean square algorithm
In this paper, we study the mean square convergence of the kernel least mean square (KLMS) algorithm. A fundamental energy conservation relation is established in the feature space. Starting from this relation, we carry out a mean square convergence analysis and obtain several important theoretical results, including an upper bound on the step size that guarantees mean square con...
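The KLMS update described in this abstract grows the learned function as a kernel expansion over past inputs, with each past prediction error as a coefficient. A minimal sketch follows; the function names, the Gaussian kernel choice, and the values of the step size `eta` and kernel width `sigma` are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=0.5):
    # kappa(a, b) = exp(-||a - b||^2 / (2 sigma^2))
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def klms(inputs, desired, eta=0.5, sigma=0.5):
    """Kernel LMS sketch: the estimate after i samples is
    f_i(u) = eta * sum_j e_j * kappa(u_j, u), updated one sample at a time."""
    centers, errors, predictions = [], [], []
    for u, d in zip(inputs, desired):
        # predict with the current kernel expansion over stored centers
        y = eta * sum(e * gaussian_kernel(c, u, sigma)
                      for c, e in zip(centers, errors))
        e = d - y                  # prediction error drives the update
        centers.append(u)          # every input becomes a new center
        errors.append(e)
        predictions.append(y)
    return np.array(predictions), centers, errors
```

Note that the expansion grows by one term per sample, which is why step-size bounds of the kind analyzed in the paper matter for stability.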
Mean Square Numerical Methods for Initial Value Random Differential Equations
Randomness may exist in the initial value, in the differential operator, or both. In [1,2], the authors discussed general order conditions and gave a global convergence proof for stochastic Runge-Kutta methods applied to stochastic ordinary differential equations (SODEs) of Stratonovich type. In [3,4], the authors discussed the random Euler method and the conditions for the mean square...
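The stochastic ϑ-methods of the main article fit this setting: the drift is weighted between explicit and implicit evaluation while the noise is handled explicitly. A minimal sketch for the scalar linear test SDE dX = λX dt + μX dW is below (function names and the parameter values λ = -2, μ = 0.5 are illustrative assumptions; for this test equation the true solution is mean-square stable when 2λ + μ² < 0):

```python
import numpy as np

def theta_step(x, lam, mu, h, dW, theta):
    # Stochastic theta-method for dX = lam*X dt + mu*X dW:
    # drift taken (1-theta) explicitly and theta implicitly, noise explicitly.
    # The drift is linear, so the implicit equation solves in closed form.
    return (x * (1.0 + (1.0 - theta) * lam * h) + mu * x * dW) \
           / (1.0 - theta * lam * h)

def mean_square_path(x0, lam, mu, h, n_steps, theta, n_paths=2000, seed=0):
    """Monte Carlo estimate of E|X_n|^2 along the discretized paths."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0, dtype=float)
    ms = [np.mean(x ** 2)]
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h), size=n_paths)  # Brownian increments
        x = theta_step(x, lam, mu, h, dW, theta)
        ms.append(np.mean(x ** 2))
    return ms
```

For example, with λ = -2, μ = 0.5, h = 0.1 and ϑ = 0.5 the estimated mean square decays toward zero, mirroring the contractivity of the exact solution.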
Mean Square Estimation
The problem of parameter estimation in linear models is pervasive in signal processing and communication applications. It is common to restrict attention to linear estimators, which simplifies the implementation as well as the mathematical derivations. The simplest design scenario is when the second-order statistics of the parameters to be estimated are known and it is desirable to minimiz...
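In that simplest scenario, the known second-order statistics determine the linear minimum mean-square-error (Wiener) estimator in closed form. A short sketch for the model y = Hx + n, under the assumption of zero-mean, uncorrelated x and n (function name and variable names are illustrative):

```python
import numpy as np

def lmmse(Cx, H, Cn):
    """Linear MMSE estimator for y = H x + n with zero-mean x, n of
    covariances Cx, Cn. Returns W with x_hat = W @ y, and the error covariance."""
    Cy = H @ Cx @ H.T + Cn            # covariance of the observation y
    W = Cx @ H.T @ np.linalg.inv(Cy)  # W = C_xy C_yy^{-1}
    Pe = Cx - W @ H @ Cx              # covariance of the estimation error
    return W, Pe
```

For instance, with Cx = H = Cn = I the estimator shrinks the observation by half, and the error covariance trace is half that of the prior, quantifying the gain from using the observation.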
Least Mean Square Algorithm
The Least Mean Square (LMS) algorithm, introduced by Widrow and Hoff in 1959 [12], is an adaptive algorithm that uses a gradient-based method of steepest descent [10]. The LMS algorithm uses estimates of the gradient vector from the available data, and incorporates an iterative procedure that makes successive corrections to the weight vector in the direction of the negative of the gradient vect...
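The iterative correction described here can be sketched in a few lines; the function name, the step size `mu`, and the zero initialization are assumptions for illustration, not taken from the abstract:

```python
import numpy as np

def lms(X, d, mu=0.05, w0=None):
    """LMS sketch: for each sample, step the weight vector opposite the
    instantaneous gradient of the squared error, w <- w + mu * e * x."""
    n, p = X.shape
    w = np.zeros(p) if w0 is None else np.asarray(w0, dtype=float)
    errors = np.empty(n)
    for i in range(n):
        y = w @ X[i]            # filter output for the current input
        e = d[i] - y            # instantaneous error
        w = w + mu * e * X[i]   # steepest-descent correction
        errors[i] = e
    return w, errors
```

On noiseless data generated by a fixed weight vector, the errors shrink and the weights converge to that vector, provided `mu` is small enough for stability.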
Journal
Journal title: Communications in Nonlinear Science and Numerical Simulation
Year: 2021
ISSN: 1007-5704
DOI: 10.1016/j.cnsns.2020.105671